Add support for WORKFLOW and user/group IDs to assemble job #416
Conversation
The `USER_ID` and `GROUP_ID` in the Dockerfile are, as far as I understand, just for the release manager to have permissions on the artifacts produced by `make.sh assemble`. Hopefully it doesn't affect anything else using the Dockerfile, but Sylvain will know better.
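For illustration, a minimal sketch of how such IDs are usually passed through at build time (the `docker build` flags and the image tag here are assumptions, not this repository's actual setup):

```sh
# Build the CI image with the host user's UID/GID so that artifacts
# written by the container into the mounted workspace are owned by
# the invoking user (e.g. the release manager) rather than root.
docker build \
  --build-arg BUILDER_UID="$(id -u)" \
  --build-arg BUILDER_GID="$(id -g)" \
  -t elasticsearch-java-builder \
  .
```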
I can't comment on the version.txt change, but I think Sylvain was going to integrate this with automating the bump action.
The generated artifacts look like they're OK; the DRA artifact set will look for a maven dir in `.ci/output/release`. So I guess the `pom` file and the rest of the files are enough and no generation of a compressed file is needed, but yeah, Sylvain will know better.
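For reference, a sketch of the kind of Maven-style layout that directory would contain (the artifact coordinates and version are assumptions, shown only to illustrate the structure the releaser expects):

```sh
# Hypothetical contents of the assembled release directory
find .ci/output/release -type f
# .ci/output/release/co/elastic/clients/elasticsearch-java/8.6.0/elasticsearch-java-8.6.0.pom
# .ci/output/release/co/elastic/clients/elasticsearch-java/8.6.0/elasticsearch-java-8.6.0.jar
# .ci/output/release/co/elastic/clients/elasticsearch-java/8.6.0/elasticsearch-java-8.6.0-sources.jar
# .ci/output/release/co/elastic/clients/elasticsearch-java/8.6.0/elasticsearch-java-8.6.0-javadoc.jar
```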
Changes to uid/gid look good.
Wondering about the version though: in the context of a DRA build, what is the version when `make.sh assemble <version>` is called? Where does it come from? The Gradle build uses this value when set, and otherwise falls back to the version in `config/version.txt`.
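A minimal sketch of that fallback logic, assuming `make.sh` takes the task and an optional version as positional arguments (the variable names are illustrative, not the script's actual contents):

```sh
#!/usr/bin/env bash
set -euo pipefail

TASK="${1:-}"       # e.g. "assemble"
VERSION="${2:-}"    # optional explicit version from the DRA job

# Fall back to the committed default when no version is passed
if [[ -z "$VERSION" ]]; then
  VERSION="$(cat config/version.txt)"
fi

echo "Building $TASK for version $VERSION"
```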
I've added a commit that
This is what we provide currently to the release manager. If the DRA releaser behaves the same, that should be all that is needed. And indeed
LGTM with my additional commit ;-)
Do not pay attention to the failing "Code style and license headers" check, it'll be fixed in another PR.
The extra changes LGTM after testing locally. I especially enjoy the simple implementation of the bump task :) Once this is merged I'll update the `version.txt` to `8.5.0` in the backport as well.
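The bump task itself isn't shown in this thread; as a rough shell equivalent of what such a task amounts to (the exact mechanism is an assumption):

```sh
# Hypothetical equivalent of the Gradle bump task: rewrite the single
# source of truth for the version, then verify the change
NEW_VERSION="8.5.0"
echo "$NEW_VERSION" > config/version.txt
git diff -- config/version.txt
```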
The backport to 8.5 failed.
To backport manually, run these commands in your terminal:

```sh
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-8.5 8.5
# Navigate to the new working tree
cd .worktrees/backport-8.5
# Create a new branch
git switch --create backport-416-to-8.5
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick --mainline 1 8c212ed80157c91550862359285be882c0b9b410
# Push it to GitHub
git push --set-upstream origin backport-416-to-8.5
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-8.5
```

Then, create a pull request where the base branch is `8.5` and the compare branch is `backport-416-to-8.5`.
Should we backport to 7.17 or will it stay on Unified Release Manager?
Yeah, that's the plan! We're waiting to merge the rest of the backports until we get a successful status on the
Adapted elastic/elasticsearch-py#2021 and elastic/elasticsearch-py#2018 to elasticsearch-java as a starting point for DRA support.
Things I changed and outstanding questions:
- Renamed `USER_ID` and `GROUP_ID` to `BUILDER_UID` and `BUILDER_GID` to match other Dockerfiles. I'm not sure if these had any effect somewhere hidden, maybe something with Gradle?
- `version.txt`: when this is backported to the 8.5 branch this value will need to be updated. I'm also not certain if this has impacts on the client?
- `.ci/output`: in particular we aren't doing anything with `.ci/output/repository/` and I'm not sure if we need to be?

Tested with the following to emulate how assemble would behave when run under DRA:
Snapshot
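The actual commands didn't survive extraction here; a plausible snapshot invocation, assuming the script lives at `.ci/make.sh` and the workflow is selected via a `WORKFLOW` environment variable (per the PR title), might be:

```sh
# Hypothetical: emulate the DRA snapshot workflow locally
WORKFLOW=snapshot ./.ci/make.sh assemble 8.6.0-SNAPSHOT
```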
Staging
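And the staging counterpart under the same assumptions, with a release-style version (no `-SNAPSHOT` suffix):

```sh
# Hypothetical: emulate the DRA staging workflow locally
WORKFLOW=staging ./.ci/make.sh assemble 8.6.0
```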